Search Results: "kov"

4 November 2021

Sandro Tosi: Python: send emails with embedded images

To send emails with images you need to use MIMEMultipart, but the basic approach:
import smtplib

from email.mime.multipart import MIMEMultipart
from email.mime.image import MIMEImage

msg = MIMEMultipart('alternative')
msg['Subject'] = "subject"
msg['From'] = from_addr
msg['To'] = to_addr

part = MIMEImage(open('/path/to/image', 'rb').read())
msg.attach(part)  # without an HTML body this arrives as a plain attachment

s = smtplib.SMTP('localhost')
s.sendmail(from_addr, to_addr, msg.as_string())
s.quit()

will produce an email with an empty body and the image as an attachment. The better way, i.e. to have the image as part of the body of the email, requires writing an HTML body that refers to that image:
import smtplib

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

msg = MIMEMultipart('alternative')
msg['Subject'] = "subject
msg['From'] = from_addr
msg['To'] = to_addr

text = MIMEText('<img src="cid:image1">', 'html')
msg.attach(text)

image = MIMEImage(open('/path/to/image', 'rb').read())

# Define the image's ID as referenced in the HTML body above
image.add_header('Content-ID', '<image1>')
msg.attach(image)

s = smtplib.SMTP('localhost')
s.sendmail(from_addr, to_addr, msg.as_string())
s.quit()

The trick is to define an image with a specific Content-ID and make that the only item in an HTML body: now you have an email that contains that specific image as the only content of the body, embedded in it.
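As a debugging aside (my own sketch, not from the original post; the placeholder bytes are fake, just to build a message without a real file), you can verify the resulting MIME structure before sending:

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

msg = MIMEMultipart('alternative')
msg.attach(MIMEText('<img src="cid:image1">', 'html'))

# _subtype is given explicitly so the demo bytes don't need to be a real image
image = MIMEImage(b'placeholder-bytes', _subtype='png')
image.add_header('Content-ID', '<image1>')
msg.attach(image)

# Walk the MIME tree: expect multipart/alternative, text/html, image/png
for part in msg.walk():
    print(part.get_content_type(), part.get('Content-ID'))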

Bonus point: if you want to take a snapshot of a webpage (which is kinda the reason I needed the code above), I found it extremely useful to use the Google PageSpeed Insights API; a good description of how to use that API with Python is available in this StackOverflow answer.

UPDATE (2020-12-26): I was made aware via email that some mail providers may not display images inline when the Content-ID value is too short (say, for example, Content-ID: 1). A solution that seems to work on most of the providers is using a sequence of random chars prefixed with a dot and suffixed with a valid mail domain.
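For illustration, a minimal sketch of generating such an ID (my own code, not from the original post: the hex-plus-domain format and the example.com placeholder are assumptions, and the standard library's email.utils.make_msgid() is another option):

import uuid

def make_content_id(domain: str = 'example.com') -> str:
    # Plenty of random characters plus a valid mail domain, so that
    # picky providers still render the image inline (format is an assumption)
    return f'<{uuid.uuid4().hex}@{domain}>'

cid = make_content_id()
print(cid)  # e.g. <3f0e9a...@example.com>
# Pass cid to image.add_header('Content-ID', cid) and reference it in the
# HTML body as <img src="cid:..."> without the angle brackets.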

30 October 2021

Dirk Eddelbuettel: dang 0.0.15: Small Correction

A bug-fix release of the dang package arrived at CRAN a little while ago. The dang package regroups a few functions of mine that had no other home, as for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor from the last release, as well as the checkCRANStatus() function tweeted about by Tim Taylor. This release corrects a small mistake of wrapping extern "C" ... around code that is not actually C but C++, which g++ kept silent about yet clang++ correctly complains about. So CRAN asked me to correct this, which this version does. The NEWS entry follows.

Changes in version 0.0.15 (2021-10-26)
  • Corrected scope of 'extern "C"' declaration to the actually C-callable function (noticed by clang++, change requested by CRAN)

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 October 2021

Arthur Diniz: Dropdown for GitHub workflows input parameters

Sometimes when we look at CI/CD tools embedded within a git-based software repository manager like GitHub, GitLab or Bitbucket, we run into a lack of some features. This time my DevOps/SRE team and I were facing the pain of not being able to create drop-downs within GitHub workflows using input parameters. Although this functionality is already available on other platforms such as Bitbucket, the specific client we were working with stored the code inside GitHub. At first I thought that someone had already solved this problem somehow, but an extensive search on the internet only turned up several angry GitHub users opening requests within the Support Community and even on Stack Overflow. So I decided to create a solution for this, always thinking about simplicity and in a way that makes it easy to get this missing functionality. I started by creating an input array pattern using commas and a tag (the selector), e.g. brackets, as the default value marker. Here is an example of what an input string would look like:
name: gh-action-dropdown-list-input
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment'
        required: true
        default: 'dev,staging,[uat],prod'
Now the final question, which would turn out to be the most complicated to deal with: how can I change the GitHub Actions interface to replace the input pattern we created earlier with a dropdown? The simplest answer I could think of was to create a Chrome and Firefox extension that would do all this logic behind the scenes, replacing the HTML input element with a select tag containing the array values and leaving the tag value (the selector) always as the default. All the code was developed in pure JavaScript, is open-source licensed under Apache 2.0 and is available at https://github.com/arthurbdiniz/gh-action-dropdown-list-input.
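The core of that replacement logic, as a rough Python sketch of my own (the extension itself is JavaScript; the function name and exact behaviour here are illustrative only):

def parse_dropdown(default_value: str):
    # Split the comma-separated default and treat the bracketed entry,
    # e.g. "[uat]", as the pre-selected option.
    options, selected = [], None
    for raw in default_value.split(','):
        item = raw.strip()
        if item.startswith('[') and item.endswith(']'):
            item = item[1:-1]
            selected = item
        options.append(item)
    return options, selected

print(parse_dropdown('dev,staging,[uat],prod'))
# (['dev', 'staging', 'uat', 'prod'], 'uat')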

Install extension

Once installed, the extension is ready to use, and the final result is the Actions interface with drop-downs. :)

Configuring selectors

Go to the top right corner of the browser you are using and click on the extension logo. A screen will pop up with tag options. Choose the right tags for you and save them. This action might require reloading the GitHub workflow tab.

Have fun using drop-downs inside GitHub. If you liked this project, please share this post and, if possible, star the repository. Also feel free to connect with me on LinkedIn: https://www.linkedin.com/in/arthurbdiniz

References
  • https://github.community/t/can-workflow-dispatch-input-be-option-list/127338
  • https://stackoverflow.com/questions/69296314/dropdown-for-github-workflows-input-parameters

18 October 2021

Dirk Eddelbuettel: dang 0.0.14: Several Updates

A new release of the dang package arrived at CRAN a couple of hours ago, exactly eight months after the previous release. The dang package regroups a few functions of mine that had no other home, as for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor from the last release, as well as the checkCRANStatus() function recently tweeted about by Tim Taylor. This release regroups a few small edits to several functions, adds a sample function for character encoding reading and conversion using a library already used by R (hence: look Ma, no new depends), adds a weekday helper, a sample usage (computing rolling min/max values) of a new simple vector class added to tidyCpp (and the function and class need to get another blog post or study), and an experimental git sha1sum and date marker (as I am not a fan of autogenerated binaries from repos as opposed to marked releases, meaning: we may see different binary releases with the same version number). The full NEWS entry follows.

Changes in version 0.0.14 (2021-10-17)
  • Updated continuous integration to run on Linux only.
  • Edited checkNonAscii.cpp for readability.
  • More robust title display in intradayMarketMonitor.R.
  • New C++-based function to read and convert encoding via the R-supplied iconv library, noting a potential variability.
  • New function wday returning day of the week as integer.
  • The signature to as.data.table was standardized.
  • A new function rollMinMax was added illustrating use of the NumVec class from tidyCpp.
  • The configure script can record the last commit date and sha1 to automate timestamping builds, but this is not activated in this release.
  • checkCRANStatus() now works correctly for single-package lookups (Jordan Mark Barbone in #4).

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

7 July 2021

Dirk Eddelbuettel: Rcpp 1.0.7: More Updates

rcpp logo The Rcpp team is pleased to announce release 1.0.7 of Rcpp, which arrived at CRAN earlier today and will be uploaded to Debian shortly. Windows and macOS builds should appear at CRAN in the next few days. This release continues with the six-month cycle started with release 1.0.5 last July. As a reminder, interim dev or rc releases will always be available in the Rcpp drat repo; this cycle there were seven (!!). These rolling releases tend to work just as well, and are also fully tested against all reverse dependencies.

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2323 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 227 in BioConductor.

This release contains a change which Luke Tierney urged us to make a good year ago in #1081 (and which we had looked at earlier in #382). Implementing the change in a regular update proved a little tricky, and my initial branch lay dormant until Iñaki revived it and finished the transition (which we then did in two PRs). The change concerns how Rcpp grows internal objects, and the new approach (thanks to the hint by Luke) is closer to what R does, guaranteeing linear behaviour. It turns out that we overlooked one aspect (coping with Modules built under earlier Rcpp releases), so the initial upload to CRAN on Saturday revealed that we needed a small adjustment, which we made for the final release. This version should now be more performant, and rest on a stable API. Based on the reverse-dependency checks by both us and CRAN (using the updated version), we expect no issues with existing code. However, if something does act up, a fresh compilation of the packages using Rcpp may help.

We also made a few other minor changes in the API, such as silencing a silly compiler warning, ensuring global Rcout and Rcerr objects, adding support for a new Rcpp::message() call, completing a switch to uint32_t instead of unsigned int, and including the cfloat header (which relates to STRICT_R_HEADERS discussed below). Similarly, the Rcpp Attributes feature was enhanced by coping better with packages with a dot in their name and their per-package include files, along with support for quieter compilation if desired.

As some Rcpp users may have seen, we plan to enable STRICT_R_HEADERS by the next release (expected in January 2022). A long issue ticket, #1158, is laying the groundwork. Maintainers of the 81 packages which are affected need a small change (such as, for example, switching from PI to M_PI); of these, 56 have already responded. We plan to be in touch in the fall. Adding the cfloat header is one help in this transition (as the corresponding C header was pulled in when STRICT_R_HEADERS is not defined), as it ensures DBL_EPSILON and the like are defined.

Last but not least, this is also the first release in which we welcome Iñaki as a new member of the Rcpp Core team. Yay! The NEWS file entries follow, summarizing the nine key PRs in this release.

Changes in Rcpp release version 1.0.7 (2021-07-06)
  • Changes in Rcpp API:
    • Refactored Rcpp_PreserveObject and Rcpp_ReleaseObject which are now O(1) (Dirk and Iñaki in #1133 and #1135 fixing #382 and #1081).
    • A spuriously assigned variable was removed (Dirk in #1138 fixing #1137).
    • Global Rcout and Rcerr objects are supported via a compiler directive (Iñaki in #1139 fixing #928).
    • Add support for Rcpp::message (Dirk in #1146 fixing #1145).
    • The uint32_t type is used throughout instead of unsigned int (Dirk in #1153 fixing #1152).
    • The cfloat header for floating point limits is now included (Dirk in #1162 fixing #1161).
  • Changes in Rcpp Attributes:
    • Packages with dots in their name can now have per-package include files (Dirk in #1132 fixing #1129).
    • New argument echo to quieten optional evaluation in sourceCpp (Dirk in #1138 fixing #1126).
  • Forthcoming Changes in Rcpp API:
    • Starting with Rcpp 1.0.8 anticipated in January 2022, STRICT_R_HEADERS will be enabled by default, see #1126.

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2616 previous questions. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 June 2021

Enrico Zini: Pipelining

This is part of a series of posts on ideas for an ansible-like provisioning system, implemented in Transilience. Running actions on a server is nice, but a network round trip for each action is not very efficient. If I need to run a linear sequence of actions, I can stream them all to the server, and then read replies streamed from the server as they get executed. This technique is called pipelining and one can see it used, for example, in Redis or Mitogen.

Roles

Ansible has the concept of "Roles" as a series of related tasks: I'll play with that. Here's an example role to install and set up fail2ban:
class Role(role.Role):
    def main(self):
        self.add(builtin.apt(
            name=["fail2ban"],
            state="present",
        ))
        self.add(builtin.copy(
            content=inline("""
                [postfix]
                enabled = true
                [dovecot]
                enabled = true
            """),
            dest="/etc/fail2ban/jail.local",
            owner="root",
            group="root",
            mode=0o644,
        ), name="configure fail2ban")
I prototyped roles as classes, with methods that push actions down the pipeline. If an action fails, all further actions for the same role won't be executed, and will be marked as skipped. Since skipping is applied per role, it means that I can blissfully stream actions for multiple roles to the server down the same pipe, and errors in one role will stop executing that role and not others. Potentially I can get multiple roles going with a single network round-trip:
#!/usr/bin/python3
import sys
from transilience.system import Mitogen
from transilience.runner import Runner
@Runner.cli
def main():
    system = Mitogen("my server", "ssh", hostname="server.example.org", username="root")
    runner = Runner(system)
    # Send roles to the server
    runner.add_role("general")
    runner.add_role("fail2ban")
    runner.add_role("prosody")
    # Run until all roles are done
    runner.main()
if __name__ == "__main__":
    sys.exit(main())
That looks like a playbook, using Python as glue rather than YAML.

Decision making in roles

Besides filing a series of actions, a role may need to take decisions based on the results of previous actions, or on facts discovered from the server. In that case, we need to wait until the results we need come back from the server, and then decide if we're done or if we want to send more actions down the pipe. Here's an example role that installs and configures Prosody:
from transilience import actions, role
from transilience.actions import builtin
from .handlers import RestartProsody
class Role(role.Role):
    """
    Set up prosody XMPP server
    """
    def main(self):
        self.add(actions.facts.Platform(), then=self.have_facts)
        self.add(builtin.apt(
            name=["certbot", "python-certbot-apache"],
            state="present",
        ), name="install support packages")
        self.add(builtin.apt(
            name=["prosody", "prosody-modules", "lua-sec", "lua-event", "lua-dbi-sqlite3"],
            state="present",
        ), name="install prosody packages")
    def have_facts(self, facts):
        facts = facts.facts  # Malkovich Malkovich Malkovich!
        domain = facts["domain"]
        ctx = {
            "ansible_domain": domain,
        }
        self.add(builtin.command(
            argv=["certbot", "certonly", "-d", f"chat. domain ", "-n", "--apache"],
            creates=f"/etc/letsencrypt/live/chat. domain /fullchain.pem"
        ), name="obtain chat certificate")
        with self.notify(RestartProsody):
            self.add(builtin.copy(
                content=self.template_engine.render_file("roles/prosody/templates/prosody.cfg.lua", ctx),
                dest="/etc/prosody/prosody.cfg.lua",
            ), name="write prosody configuration")
            self.add(builtin.copy(
                src="roles/prosody/templates/firewall-ruleset.pfw",
                dest="/etc/prosody/firewall-ruleset.pfw",
            ), name="write prosody firewall")
    # ...
This files some general actions down the pipe, with a hook that says: when the results of this action come back, run self.have_facts(). At that point, the role can use the results to build certbot command lines, render prosody's configuration from Jinja2 templates, and use the results to file further actions down the pipe. Note that this way, while the server is potentially still busy installing prosody, we're already streaming prosody's configuration to it. If anything goes wrong with the installation of prosody's package, the role will be marked as failed and all further actions of the same role, even those filed by have_facts(), will be skipped.

Notify and handlers

In the previous example self.notify() also appears: that's my attempt to model the equivalent of Ansible's handlers. If any of the actions inside the with block produce changes, then the RestartProsody role will be executed, potentially filing more actions at the end of the playbook. The runner will take care of collecting all the triggered role classes in a set, which discards duplicates, and then running the main() method of all resulting roles, which will cause more actions to be filed down the pipe.

Action conditions

Sometimes some actions are only meaningful as consequences of other actions. Let's take, for example, enabling buster-backports as an extra apt source:
        a = self.add(builtin.copy(
            owner="root",
            group="root",
            mode=0o644,
            dest="/etc/apt/sources.list.d/debian-buster-backports.list",
            content="deb [arch=amd64] https://mirrors.gandi.net/debian/ buster-backports main contrib",
        ), name="enable backports")
        self.add(builtin.apt(
            update_cache=True
        ), name="update after enabling backports",
           # Run only if the previous copy changed anything
           when={a: ResultState.CHANGED},
        )
Here we want to update Apt's cache, which is a slow operation, only after we actually write /etc/apt/sources.list.d/debian-buster-backports.list. If the file was already there from a previous run, we can skip downloading the new package lists. The when= attribute adds an annotation to the action that is sent down the pipeline, saying that it should only be run if the state of a previous action matches the given one. In this case, when on the remote it's the turn of "update after enabling backports", it gets skipped unless the state of the previous "enable backports" action is CHANGED.

Effects of pipelining

I ported enough of Ansible's modules to be able to run the provisioning scripts of my VPS entirely via Transilience. This is the playbook run as plain Ansible:
$ time ansible-playbook vps.yaml
[...]
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
real    2m10.072s
user    0m33.149s
sys 0m10.379s
This is the same playbook run with Ansible speeded up via the Mitogen backend, which makes Ansible more bearable:
$ export ANSIBLE_STRATEGY=mitogen_linear
$ time ansible-playbook vps.yaml
[...]
servername       : ok=55   changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
real    0m24.428s
user    0m8.479s
sys 0m1.894s
This is the same playbook ported to Transilience:
$ time ./provision
[...]
real    0m2.585s
user    0m0.659s
sys 0m0.034s
Doing nothing went from 2 minutes down to 3 seconds! That's the kind of running time that finally makes me comfortable with maintaining my VPS by editing the playbook only, and never logging in to mess with the system configuration by hand!

Next steps

I'm quite happy with what I have: I can now maintain my VPS with a simple script with quick iterative cycles. I might use it to develop new playbooks, and port them to Ansible only when they're tested and need to be shared with infrastructure that needs to rely on something more solid and battle-tested than a prototype provisioning system. I might also keep working on it as I have more interesting ideas that I'd like to try. I feel like Ansible reached some architectural limits that are hard to overcome without a major redesign, and they are in many ways hardcoded in its playbook configuration. It's nice to be able to try out new designs without that baggage. I'd love it if even just the library of Transilience actions could grow and gain widespread use. Ansible modules standardized a set of management operations that, I think, became the way people think about system management, and should really be broadly available outside of Ansible. If you are interested in playing with Transilience, do get in touch or send a pull request! :) Next step: Reimagining Ansible variables.
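As a closing illustration of the per-role skip semantics described earlier, here is a toy sketch (mine, not Transilience's actual code; execute() and the action names are made up):

def execute(action):
    # Stand-in for the real remote execution
    return action != 'bad-step'

def run_pipeline(actions):
    # actions: (role, action) pairs from several roles, interleaved
    failed_roles = set()
    for role, action in actions:
        if role in failed_roles:
            print(f'{role}: {action} SKIPPED')
        elif execute(action):
            print(f'{role}: {action} ok')
        else:
            failed_roles.add(role)
            print(f'{role}: {action} FAILED')

run_pipeline([
    ('general', 'step1'),
    ('fail2ban', 'bad-step'),   # fails: the rest of fail2ban is skipped
    ('fail2ban', 'configure'),  # skipped
    ('general', 'step2'),       # still runs
])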

21 April 2021

Enrico Zini: Python output buffering

Here's a little toy program that displays a message like a split-flap display:
#!/usr/bin/python3
import sys
import time
def display(line: str):
    cur = '0' * len(line)
    while True:
        print(cur, end="\r")
        if cur == line:
            break
        time.sleep(0.09)
        cur = "".join(chr(min(ord(c) + 1, ord(oc))) for c, oc in zip(cur, line))
    print()
message = " ".join(sys.argv[1:])
display(message.upper())
This only works if the script's stdout is unbuffered. Pipe the output through cat, and you get a long wait, and then the final string, without the animation. What is happening is that since the output is not going to a terminal, optimizations kick in that buffer the output and send it in bigger chunks, to make processing bulk I/O more efficient. I haven't found a good introductory explanation of buffering in Python's documentation. The details seem to be scattered in the io module documentation, and they mostly assume that one is already familiar with concepts like unbuffered, line-buffered or block-buffered. The libc documentation has a good quick introduction that one can read to get up to speed.

Controlling buffering in Python

In Python, one can force a buffer flush with the flush() method of the output file object, like sys.stdout.flush(), to make sure pending buffered output gets sent. Python's print() function also supports flush=True as an optional argument:
    print(cur, end="\r", flush=True)
If one wants to change the default buffering for a file descriptor, since Python 3.7 there's a convenient reconfigure() method, which can reconfigure line buffering only:
sys.stdout.reconfigure(line_buffering=True)
Otherwise, the technique is to reassign sys.stdout to something that has the behaviour one wants (code from this StackOverflow thread):
import io
# Python 3, open as binary, then wrap in a TextIOWrapper with write-through.
sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)
If one needs all this to implement a progressbar, one should make sure to have a look at the progressbar module first.
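As a final aside (my own sketch, not from the post): the block buffering described above kicks in exactly when stdout is not a terminal, which a script can detect and compensate for:

#!/usr/bin/python3
import sys

# If stdout is not a terminal (e.g. piped through cat), stdio switches to
# block buffering; force line buffering back on so output appears promptly.
if not sys.stdout.isatty():
    sys.stdout.reconfigure(line_buffering=True)  # Python 3.7+

for i in range(3):
    print(f'line {i}')  # flushed at each newline even when piped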

5 April 2021

Kees Cook: security things in Linux v5.9

Previously: v5.8

Linux v5.9 was released in October, 2020. Here's my summary of various security things that I found interesting:

seccomp user_notif file descriptor injection
Sargun Dhillon added the ability for SECCOMP_RET_USER_NOTIF filters to inject file descriptors into the target process using SECCOMP_IOCTL_NOTIF_ADDFD. This lets container managers fully emulate syscalls like open() and connect(), where an actual file descriptor is expected to be available after a successful syscall. In the process I fixed a couple bugs and refactored the file descriptor receiving code.

zero-initialize stack variables with Clang
When Alexander Potapenko landed support for Clang's automatic variable initialization, it did so with a byte pattern designed to really stand out in kernel crashes. Now he's added support for doing zero initialization via CONFIG_INIT_STACK_ALL_ZERO, which besides actually being faster, has a few behavior benefits as well. Unlike pattern initialization, which has a higher chance of triggering existing bugs, zero initialization provides safe defaults for strings, pointers, indexes, and sizes. Like the pattern initialization, this feature stops entire classes of uninitialized stack variable flaws.

common syscall entry/exit routines
Thomas Gleixner created architecture-independent code to do syscall entry/exit, since much of the kernel's work during a syscall entry and exit is the same. There was no need to repeat this in each architecture, and having it implemented separately meant bugs (or features) might only get fixed (or implemented) in a handful of architectures. It means that features like seccomp become much easier to build since they wouldn't need per-architecture implementations any more. Presently only x86 has switched over to the common routines.

SLAB kfree() hardening
To reach CONFIG_SLAB_FREELIST_HARDENED feature-parity with the SLUB heap allocator, I added naive double-free detection and the ability to detect cross-cache freeing in the SLAB allocator. This should keep a class of type-confusion bugs from biting kernels using SLAB. (Most distro kernels use SLUB, but some smaller devices prefer the slightly more compact SLAB, so this hardening is mostly aimed at those systems.)

new CAP_CHECKPOINT_RESTORE capability
Adrian Reber added the new CAP_CHECKPOINT_RESTORE capability, splitting this functionality off of CAP_SYS_ADMIN. The needs for the kernel to correctly checkpoint and restore a process (e.g. used to move processes between containers) continue to grow, and it became clear that the security implications were lower than those of CAP_SYS_ADMIN yet distinct from other capabilities. Using this capability is now the preferred method for doing things like changing /proc/self/exe.

debugfs boot-time visibility restriction
Peter Enderborg added the debugfs boot parameter to control the visibility of the kernel's debug filesystem. The contents of debugfs continue to be a common area of sensitive information being exposed to attackers. While this was effectively possible by unsetting CONFIG_DEBUG_FS, that wasn't a great approach for system builders needing a single set of kernel configs (e.g. a distro kernel), so now it can be disabled at boot time.

more seccomp architecture support
Michael Karcher implemented the SuperH seccomp hooks, Guo Ren implemented the C-SKY seccomp hooks, and Max Filippov implemented the xtensa seccomp hooks. Each of these included the ever-important updates to the seccomp regression testing suite in the kernel selftests.

stack protector support for RISC-V
Guo Ren implemented -fstack-protector (and -fstack-protector-strong) support for RISC-V. This is the initial global-canary support while the patches to GCC to support per-task canaries are getting finished (similar to the per-task canaries done for arm64). This will mean nearly all stack frame write overflows are no longer useful to attackers on this architecture. It's nice to see this finally land for RISC-V, which is quickly approaching architecture feature parity with the other major architectures in the kernel.

new tasklet API
Romain Perier and Allen Pais introduced a new tasklet API to make tasklet use safer. Much like the timer_list refactoring work done earlier, the tasklet API is also a potential source of simple function-pointer-and-first-argument controlled exploits via linear heap overwrites. It's a smaller attack surface since it's used much less in the kernel, but it is the same weak design, making it a sensible thing to replace. While the use of the tasklet API is considered deprecated (replaced by threaded IRQs), it's not always a simple mechanical refactoring, so the old API still needs refactoring (since that CAN be done mechanically in most cases).

x86 FSGSBASE implementation
Sasha Levin, Andy Lutomirski, Chang S. Bae, Andi Kleen, Tony Luck, Thomas Gleixner, and others landed the long-awaited FSGSBASE series. This provides task switching performance improvements while keeping the kernel safe from modules accidentally (or maliciously) trying to use the features directly (which exposed an unprivileged direct kernel access hole).

filter x86 MSR writes
While it's been long understood that writing to CPU Model-Specific Registers (MSRs) from userspace was a bad idea, it has been left enabled for things like MSR_IA32_ENERGY_PERF_BIAS. Boris Petkov has decided enough is enough and has now enabled logging and kernel tainting (TAINT_CPU_OUT_OF_SPEC) by default, and added a way to disable MSR writes at runtime. (However, since this is controlled by a normal module parameter and the root user can just turn writes back on, I continue to recommend that people build with CONFIG_X86_MSR=n.) The expectation is that userspace MSR writes will be entirely removed in future kernels.

uninitialized_var() macro removed
I made treewide changes to remove the uninitialized_var() macro, which had been used to silence compiler warnings. The rationale for this macro was weak to begin with ("the compiler is reporting an uninitialized variable that is clearly initialized") since it was mainly papering over compiler bugs. However, it creates a much more fragile situation in the kernel since now such uses can actually disable automatic stack variable initialization, as well as mask legitimate "unused variable" warnings. The proper solution is to just initialize variables the compiler warns about.

function pointer cast removals
Oscar Carter has started removing function pointer casts from the kernel, in an effort to allow the kernel to build with -Wcast-function-type. The future use of Control Flow Integrity checking (which does validation of function prototypes matching between the caller and the target) tends not to work well with function casts, so it'd be nice to get rid of these before CFI lands.

flexible array conversions
As part of Gustavo A. R. Silva's on-going work to replace zero-length and one-element arrays with flexible arrays, he has documented the details of the flexible array conversions, and the various helpers to be used in kernel code. Every commit gets the kernel closer to building with -Warray-bounds, which catches a lot of potential buffer overflows at compile time.

That's it for now! Please let me know if you think anything else needs some attention. Next up is Linux v5.10.

2021, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

26 March 2021

Daniel Lange: The Stallman wars

So, 2021 isn't bad enough yet, but don't despair, people are working to fix that:

Welcome to the Stallman wars

Team Cancel: https://rms-open-letter.github.io/ (repo)
Team Support: https://rms-support-letter.github.io/ (repo)

Current stats are:

Team Cancel:  3028 signers from 1413 individual commit authors
Team Support: 6249 signers from 5018 individual commit authors
Git shortlog (Top 10):
rms_cancel.git (Last update: 2021-04-07 15:42:33 (UTC))
  1228  Neil McGovern
   251  Joan Touzet
    86  Elana Hashman
    71  Molly de Blanc
    36  Shauna
    19  Juke
    18  Stefano Zacchiroli
    17  Alexey Mirages
    16  Devin Halladay
    14  Nader Jafari
rms_support.git (Last update: 2021-04-12 09:25:53 (UTC))
  1678  shenlebantongying
  1564  nukeop
  1550  Ivanq
   826  Victor
   746  Job Bautista
   123  nekonee
    61  Victor Gridnevsky
    38  Patrick Spek
    25  Borys Kabakov
    17  KIM Taeyeob
(last updated 2021-04-12 09:26:15 (UTC))

Technical info:
Signers are counted from their "Signed / Individuals" sections. Commits are counted with git shortlog -s.
Team Cancel also has organizational signatures with Mozilla, Suse and X.Org being among the notable signatories. Debian is in the process of running a GR to join (or not join) that list. The 16 original signers of the Cancel petition are added in their count. Neil McGovern, Juke and shenlebantongying need .mailmap support as they have committed with different names. Further reading:

21 February 2021

Enrico Zini: Software development links

  • Next time we'll iterate on Himblick design and development: Raspberry Pi 4 can now run plain standard Debian, which should make a lot of things easier and cleaner when developing products based on it.
  • Somewhat related to nspawn-runner, random links somehow related to my feeling that nspawn comes from an ecosystem which gives me a bigger sense of focus on security and solidity than Docker.
  • I did a lot of work on A38, a Python library to deal with FatturaPA electronic invoicing, and it was a wonderful surprise to see a positive review spontaneously appear: Fattura elettronica, come visualizzarla con python (TuttoLogico)
  • A beautiful, hands-on explanation of git internals, as a step by step guide to reimplementing your own git: Git Internals - Learn by Building Your Own Git
  • I recently tried meson and liked it a lot. I then gave unity builds a try, since it supports them out of the box, and found myself with doubts. I found I wasn't alone, and I liked The Evils of Unity Builds as a summary of the situation.
  • A point of view I liked on technological debt: Technical debt as a lack of understanding
  • Finally, a classic, and a masterful explanation for a question that keeps popping up: RegEx match open tags except XHTML self-contained tags

18 February 2021

Dirk Eddelbuettel: dang 0.0.13: New intradayMarketMonitor

sp500 intraday monitor A new release of the dang package got to CRAN earlier today, a few months since the last release. The dang package regroups a few functions of mine that had no other home: lsos() from a StackOverflow question from 2009 (!!) is one, this overbought/oversold price band plotter from an older blog post is another. This release adds one function I tweeted about one month ago. It takes a function Josh Ulrich originally tweeted about in November with a reference to this gist. I refactored this into a proper function and polished a few edges: the data now properly rolls off after a fixed delay (of two days), it should work with other symbols (though we both focused on ^GSPC as a free (!!) real-time SP500 index, albeit only during trading hours), it properly gaps between trading days, and more. You can simply invoke it via
dang::intradayMarketMonitor()
and a chart just like the one here will grow (though there is no state: if you stop it or reboot, the plot starts from scratch). The short NEWS entry follows.

Changes in version 0.0.13 (2021-02-17)
  • New function intradayMarketMonitor based on an earlier gist-posted snippet by Josh Ulrich.
  • The CI setup was generalized as a test for 'r-ci' and is used essentially unchanged with three different providers.

Courtesy of my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 February 2021

Louis-Philippe Véronneau: What are the incentive structures of Free Software?

When I started my Master's degree in January 2018, I was confident I would be done in a year and a half. After all, I only had one year of classes and I figured 6 months to write a thesis would be plenty. Three years later, I'm finally done: the final version of my thesis was accepted on January 22nd 2021. My thesis, entitled What are the incentive structures of Free Software? An economic analysis of Free Software's specific development model, can be found here[1]. If you care about such things, both the data and the final document can be built from source with the code in this git repository.

Results and analysis

My thesis is divided in four main sections:
  1. an introduction to FOSS
  2. a chapter discussing the incentive structures of Free Software (and arguing the so called Tragedy of the Commons isn't inevitable)
  3. a chapter trying to use empirical data to validate the theories presented in the previous chapter
  4. an annex on the various FOSS business models
If you're reading this blog post, chances are you'll find both sections 1 and 4 a tad boring, as you might already be familiar with these concepts.

Incentives

So, why do people contribute to Free Software? Unsurprisingly, it's complicated. Many economists have studied this topic, but for some reason, most research happened in the early 2000s. Although papers don't all agree with each other, most importantly about the variables' relative importance, the main incentives[2] can be summarized by:
  • expectation of monetary gain
  • writing FOSS as a hobby (that includes "scratching your own itch")
  • liking the FOSS community and feeling a sense of belonging
  • altruism (writing FOSS "for Good")
Giving weights to these variables is not an easy thing: the FOSS ecosystem is highly heterogeneous and thus, people tend to write FOSS for different reasons. Moreover, incentives tend to shift with time as the ecosystem does. People writing Free Software in the 1990s probably did it for different reasons than people in 2021. These four variables can also be divided in two general categories: extrinsic and intrinsic incentives. Monetary gain expectancy is an extrinsic incentive (its value is delayed and mediated), whereas the three other ones are intrinsic (they have an immediate value by themselves).

Empirical analysis

Theory is nice, but it's even better when you can back it up with data. Sadly, most of the papers on the economic incentives of FOSS are either purely theoretical, or use sample sizes so small they could as well be. Using the data from the StackOverflow 2018 survey, I thus tried to see if I could somehow confirm my previous assumptions. With 129 questions and more than 100 000 respondents (which after statistical processing yields between 28 000 and 39 000 observations per variable of interest), the StackOverflow 2018 survey is a very large dataset compared to what economists are used to working with. Sadly, it wasn't entirely enough to come up with hard answers. There is a strong and significant correlation between writing Free Software and having a higher salary, but endogeneity problems[3] made it hard to give a reliable estimate of how much money this would represent. Same goes for writing code as a hobby: it seems there is a strong and significant correlation, but the exact numbers I came up with cannot really be trusted. The results on community as an incentive to writing FOSS were the ones that surprised me the most. Although I expected the relation to be quite strong, the coefficients predicted were in fact quite small. I theorise this is partly due to only 8% of the respondents declaring they didn't feel like they belonged in the IT community. With such a high level of adherence, the margin for improvement has to be smaller. As for altruism, I wasn't able to get any meaningful results. In my opinion this is mostly due to the fact there was no explicit survey question on this topic and I tried to make up for it by cobbling data together. Kinda anti-climactic, isn't it? I would've loved to come up with decisive conclusions on this topic, but if there's one thing I learned while writing this thesis, it is that I don't know much after all.

  1. Note that the thesis is written in French.
  2. Of course, life is complex and so are people's motivations. One could come up with dozen more reasons why people contribute to Free Software. The "fun" of theoretical modelisation is trying to make complex things somewhat simpler.
  3. I'll spare you the details, but this means there is no way to know if this correlation is the result of a causal link between the two variables. There are ways to deal with this problem (using an instrumental variables model is a very popular one), but again, the survey didn't provide the proper instruments to do so. For example, it could very well be the correlation is due to omitted variables. If you are interested in this topic (and can read French), I talk about this issue in section 3.2.8.

9 February 2021

Kees Cook: security things in Linux v5.8

Previously: v5.7

Linux v5.8 was released in August, 2020. Here's my summary of various security things that caught my attention:

arm64 Branch Target Identification
Dave Martin added support for ARMv8.5's Branch Target Instructions (BTI), which are enabled in userspace at execve() time, and all the time in the kernel (which required manually marking up a lot of non-C code, like assembly and JIT code). With this in place, Jump-Oriented Programming (JOP, where code gadgets are chained together with jumps and calls) is no longer available to the attacker. An attacker's code must make direct function calls. This basically reduces the usable code available to an attacker from every word in the kernel text to only function entries (or jump targets). This is a low granularity "forward-edge" Control Flow Integrity (CFI) feature, which is important (since it greatly reduces the potential targets that can be used in an attack) and cheap (implemented in hardware). It's a good first step to strong CFI, but (as we've seen with things like CFG) it isn't usually strong enough to stop a motivated attacker. High granularity CFI (which uses a more specific branch-target characteristic, like function prototypes, to track expected call sites) is not yet a hardware supported feature, but the software version will be coming in the future by way of Clang's CFI implementation.

arm64 Shadow Call Stack
Sami Tolvanen landed the kernel implementation of Clang's Shadow Call Stack (SCS), which protects the kernel against Return-Oriented Programming (ROP) attacks (where code gadgets are chained together with returns). This "backward-edge" CFI protection is implemented by keeping a second dedicated stack pointer register (x18) and keeping a copy of the return addresses stored in a separate "shadow stack". In this way, manipulating the regular stack's return addresses will have no effect. (And since a copy of the return address continues to live in the regular stack, no changes are needed for back trace dumps, etc.) It's worth noting that unlike BTI (which is hardware based), this is a software defense that relies on the location of the Shadow Stack (i.e. the value of x18) staying secret, since the memory could be written to directly. Intel's hardware ROP defense (CET) uses a hardware shadow stack that isn't directly writable. ARM's hardware defense against ROP is PAC (which is actually designed as an arbitrary CFI defense; it can be used for forward-edge too), but that depends on having ARMv8.3 hardware. The expectation is that SCS will be used until PAC is available.

Kernel Concurrency Sanitizer infrastructure added
Marco Elver landed support for the Kernel Concurrency Sanitizer, which is a new debugging infrastructure to find data races in the kernel, via CONFIG_KCSAN. This immediately found real bugs, with some fixes having already landed too. For more details, see the KCSAN documentation.

new capabilities
Alexey Budankov added CAP_PERFMON, which is designed to allow access to perf(). The idea is that this capability gives a process access to only read aspects of the running kernel and system. No longer will access be needed through the much more powerful abilities of CAP_SYS_ADMIN, which has many ways to change kernel internals. This allows for a split between controls over the confidentiality (read access via CAP_PERFMON) of the kernel vs control over integrity (write access via CAP_SYS_ADMIN). Alexei Starovoitov added CAP_BPF, which is designed to separate BPF access from the all-powerful CAP_SYS_ADMIN. It is designed to be used in combination with CAP_PERFMON for tracing-like activities and CAP_NET_ADMIN for networking-related activities. For things that could change kernel integrity (i.e. write access), CAP_SYS_ADMIN is still required.

network random number generator improvements
Willy Tarreau made the network code's random number generator less predictable. This will further frustrate any attacker's attempts to recover the state of the RNG externally, which might lead to the ability to hijack network sessions (by correctly guessing packet states).

fix various kernel address exposures to non-CAP_SYSLOG
I fixed several situations where kernel addresses were still being exposed to unprivileged (i.e. non-CAP_SYSLOG) users, though usually only through odd corner cases. After refactoring how capabilities were being checked for files in /sys and /proc, the kernel modules sections, kprobes, and BPF exposures got fixed. (Though in doing so, I briefly made things much worse before getting it properly fixed. Yikes!)

RISCV W^X detection
Following up on his recent work to enable strict kernel memory protections on RISCV, Zong Li has now added support for CONFIG_DEBUG_WX as seen for other architectures. Any writable and executable memory regions in the kernel (which are lovely targets for attackers) will be loudly noted at boot so they can get corrected.

execve() refactoring continues
Eric W. Biederman continued working on execve() refactoring, including getting rid of the frequently problematic recursion used to locate binary handlers. I used the opportunity to dust off some old binfmt_script regression tests and get them into the kernel selftests.

multiple /proc instances
Alexey Gladkov modernized /proc internals and provided a way to have multiple /proc instances mounted in the same PID namespace. This allows for having multiple views of /proc, with different features enabled. (Including the newly added hidepid=4 and subset=pid mount options.)

set_fs() removal continues
Christoph Hellwig, with Eric W. Biederman, Arnd Bergmann, and others, have been diligently working to entirely remove the kernel's set_fs() interface, which has long been a source of security flaws due to weird confusions about which address space the kernel thought it should be accessing. Beyond things like the lower-level per-architecture signal handling code, this has needed to touch various parts of the ELF loader, and networking code too.

READ_IMPLIES_EXEC is no more for native 64-bit
The READ_IMPLIES_EXEC flag was a work-around for dealing with the addition of non-executable (NX) memory when x86_64 was introduced. It was designed as a way to mark a memory region as "well, since we don't know if this memory region was expected to be executable, we must assume that if we need to read it, we need to be allowed to execute it too". It was designed mostly for stack memory (where trampoline code might live), but it would carry over into all mmap() allocations, which would mean sometimes exposing a large attack surface to an attacker looking to find executable memory. While normally this didn't cause problems on modern systems that correctly marked their ELF sections as NX, there were still some awkward corner-cases. I fixed this by splitting READ_IMPLIES_EXEC from the ELF PT_GNU_STACK marking on x86 and arm/arm64, and declaring that a native 64-bit process would never gain READ_IMPLIES_EXEC on x86_64 and arm64, which matches the behavior of other native 64-bit architectures that correctly didn't ever implement READ_IMPLIES_EXEC in the first place.

array index bounds checking continues
As part of the ongoing work to use modern flexible arrays in the kernel, Gustavo A. R. Silva added the flex_array_size() helper (as a cousin to struct_size()). The zero/one-member into flex array conversions continue with over a hundred commits as we slowly get closer to being able to build with -Warray-bounds.

scnprintf() replacement continues
Chen Zhou joined Takashi Iwai in continuing to replace potentially unsafe uses of sprintf() with scnprintf(). Fixing all of these will make sure the kernel avoids nasty buffer concatenation surprises.

That's it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.9.

2021, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

15 January 2021

Dirk Eddelbuettel: Rcpp 1.0.6: Some Updates

rcpp logo The Rcpp team is proud to announce release 1.0.6 of Rcpp, which arrived at CRAN earlier today and has been uploaded to Debian too. Windows and macOS builds should appear at CRAN in the next few days. This marks the first release on the new six-month cycle announced with release 1.0.5 in July. As a reminder, interim dev or rc releases will often be available in the Rcpp drat repo; this cycle there were four. Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 2174 packages on CRAN depend on Rcpp for making analytical code go faster and further (an 8.5% increase just since the last release), along with 207 in BioConductor. This release features six different pull requests from five different contributors, mostly fixing fairly small corner cases, plus some minor polish on documentation and continuous integration. Before releasing we once again made numerous reverse-dependency checks, none of which revealed any issues. So the passage at CRAN was pretty quick despite the large dependency footprint, and we are once again grateful for all the work the CRAN maintainers do.

Changes in Rcpp patch release version 1.0.6 (2021-01-14)
  • Changes in Rcpp API:
    • Replace remaining few uses of EXTPTR_PTR with R_ExternalPtrAddr (Kevin in #1098 fixing #1097).
    • Add push_back and push_front for DataFrame (Walter Somerville in #1099 fixing #1094).
    • Remove a misleading-to-wrong comment (Mattias Ellert in #1109 cleaning up after #1049).
    • Address a sanitizer report by initializing two private bool variables (Benjamin Christoffersen in #1113).
    • External pointer finalizer toggle default values were corrected to true (Dirk in #1115).
  • Changes in Rcpp Documentation:
    • Several URLs were updated to https and/or new addresses (Dirk).
  • Changes in Rcpp Deployment:
    • Added GitHub Actions CI using the same container-based setup used previously, and also carried code coverage over (Dirk in #1128).
  • Changes in Rcpp support functions:
    • Rcpp.package.skeleton() avoids warning from R. (Dirk)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow which also allows searching among the (currently) 2616 previous questions. If you like this or other open-source work I do, you can sponsor me at GitHub. My sincere thanks to my current sponsors for keeping me caffeinated.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

5 January 2021

Russ Allbery: New year haul

For once, I've already read and reviewed quite a few of these books.

Elizabeth Bear – Machine (sff)
Timothy Caulfield – Your Day, Your Way (non-fiction)
S.A. Chakraborty – The City of Brass (sff)
John Dickerson – The Hardest Job in the World (non-fiction)
Tracy Deonn – Legendborn (sff)
Lindsay Ellis – Axiom's End (sff)
Alix E. Harrow – The Once and Future Witches (sff)
TJ Klune – The House in the Cerulean Sea (sff)
Maria Konnikova – The Biggest Bluff (non-fiction)
Talia Lavin – Culture Warlords (non-fiction)
Yoon Ha Lee – Phoenix Extravagant (sff)
Yoon Ha Lee, et al. – The Vela (sff)
Michael Lewis – Flash Boys (non-fiction)
Michael Lewis – Losers (non-fiction)
Michael Lewis – The Undoing Project (non-fiction)
Megan Lindholm – Wizard of the Pigeons (sff)
Nathan Lowell – Quarter Share (sff)
Adrienne Martini – Somebody's Gotta Do It (non-fiction)
Tamsyn Muir – Princess Florinda and the Forty-Flight Tower (sff)
Naomi Novik – A Deadly Education (sff)
Margaret Owen – The Merciful Crow (sff)
Anne Helen Petersen – Can't Even (non-fiction)
Devon Price – Laziness Does Not Exist (non-fiction)
The Secret Barrister – The Secret Barrister (non-fiction)
Studs Terkel – Working (non-fiction)
Kathi Weeks – The Problem with Work (non-fiction)
Reeves Wiedeman – Billion Dollar Loser (non-fiction)

Rather a lot of non-fiction in this batch, much more than usual. I've been in a non-fiction mood lately. So many good things to read!

26 December 2020

Sandro Tosi: Gmail and tracker.debian.org: add a label to team emails

Recently the Python Team started the migration from the Alioth mailing list to a tracker.debian.org team. The Alioth mailing list, being a real ml, included a List-Id header, which is handled by Gmail natively and which you can use to create a filter. Tracker.d.o instead uses a custom header, X-Distro-Tracker-Team, and Gmail doesn't allow creating filters on custom headers. But there's a solution: Google Apps Scripts.

1. Apps script and its configuration

To create a new apps script, click "New project" from the scripts dashboard: this will open a code editor, so now cut and paste this code (change the label name to match the one you want to use, and the header too if you're going to use this for another team):

function tagPythonTeamEmails() {

  // search for all recent threads (hours is the smallest interval searchable)
  var threads = GmailApp.search("newer_than:1h");
  // this is the label i want to apply to the messages
  var label = GmailApp.getUserLabelByName("Debian_Python_Pkgs"); // <-- EDIT THIS

  for (var i = 0; i < threads.length; i++) {

    // get all messages in a given thread
    var messages = threads[i].getMessages();

    // for each message in the thread
    for (var j = 0; j < messages.length; j++) {
      var message = messages[j];
      // needed to have access to the full email: headers + body
      var email = message.getRawContent();
      // this is the header i want to check for: if present, add label and archive
      if (email.indexOf("X-Distro-Tracker-Team: python") > -1) {
        threads[i].addLabel(label);
        threads[i].moveToArchive();
      }
    }
  }
}
(This code is largely inspired by this StackOverflow answer.)

Given we're going to perform changes to the emails, we need to add extra permissions to the script; first we need to toggle the option "View > Show manifest file" which will show the appsscript.json file. Once done, edit it and update the scope of the script to include:

  "oauthScopes": [
...
"https://www.googleapis.com/auth/gmail.readonly",
"https://www.googleapis.com/auth/gmail.modify"
],

These new permissions will be granted when we deploy the script, which we can actually do right now: from the editor window click on "Publish > Deploy from manifest..." and then "Install add-on"; this will make the app script available to operate on Gmail, opening the usual Google authorization popup to grant access.

2. Schedule the app script execution

Unlike filters defined in the web UI, apps scripts are not triggered every time a mail is received (there's a pub-sub mechanism, but it doesn't look like it would work for this), so if we want to execute the new script periodically, we need to create a time-driven trigger.

The simplest way is probably from the script editor (but you can also follow this doc): "View > Current project's triggers" and then "Add trigger" from there. For my specific use-case I set:

  • event source = time-driven
  • type of time based trigger = minutes timer
  • minute interval = 5 minutes
  • Failure notification settings = notify me immediately
It's not great that you have to wait 5 minutes to get proper labels, and while it's true you can schedule the trigger for every minute, there are further considerations for that, discussed below.

3. Additional information

Be aware of quotas and limitations for Apps Script; in particular the "Triggers total runtime" quota, set to "90 min / day", is what led me to set the trigger to every 5 to 10 minutes.

Use the scripts dashboard (doc) to monitor the script execution logs, in the "My Executions" section, and specifically the "Duration" column: the script execution time depends exclusively on the number of emails to process at every execution, so fine-tune the trigger time to stay within your quota.

The newer_than search operator doesn't allow a granularity smaller than hours (although that's not even documented), so that means we're going to scan the same emails multiple times.
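
One way to reduce that repeated work, a sketch of mine rather than something from the original setup, is to exclude threads that already carry the label from the search (assuming Gmail's search syntax resolves the label name as written):

// only fetch recent threads that don't already have our label applied
var threads = GmailApp.search("newer_than:1h -label:Debian_Python_Pkgs");

Threads without the team header will still be rescanned, but anything already labeled and archived is skipped.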

UPDATE (2020-12-26): I've already had to bump from every 5 to every 10 minutes, as otherwise I'd get Exception: Service invoked too many times for one day: gmail. (Remember the quota!)

Russ Allbery: Review: The Biggest Bluff

Review: The Biggest Bluff, by Maria Konnikova
Publisher: Penguin Press
Copyright: 2020
ISBN: 0-525-52263-8
Format: Kindle
Pages: 335
After a particularly unlucky year for her family, Maria Konnikova was reading about the balance between luck and control in life and discovered, to her surprise, that John von Neumann, one of the foundational thinkers of computer science, was fascinated by poker. He found most card games boring because they relied on luck. Poker, however, he thought was the perfect balance between luck and skill: enough skill to make its effect undeniable, but enough luck that one could not control the game fully regardless of skill. Konnikova decided on a research project: spend one year learning No Limit Texas Hold'em from one of the best poker players in the world, with a goal of competing in the World Series of Poker. She had studied the description-experience gap during her doctoral research in psychology and wanted to see if the experience of randomness in poker would teach her something the description of the randomness of life could not. Before starting this project, she didn't know the basic rules and had never watched a game. Erik Seidel agreed to mentor her, and The Biggest Bluff is her account of that experience.

This book is simultaneously frustrating and fascinating in ways that I don't think can be untangled without making it a far different book. Fitting, I suppose, for a book about how our brains entangle luck and skill.

First, if you're looking for a book about poker play, this is not one. Konnikova rarely talks about specific hands or tournaments in more detail than her overall trajectory. That was a disappointment. In the few places she does describe some of her betting decisions, analysis of the other players, and tournament strategy, her accounts are engrossing and suspenseful. I would have happily read a book chronicling her poker tournaments and the decisions she made, but this is not that book.

What The Biggest Bluff is instead is a psychological and philosophical examination of the process of learning poker. Konnikova uses her experiences as launching points into philosophical digressions. Even the lessons that have limited surface utility outside of poker, such as learning to suppress body language to avoid giving away information, turn into digressions about interpersonal dynamics and personality types. There are interesting tidbits here, but I've read a lot of popular psychology and was more interested in the poker. My frequent reading experience was impatiently waiting for Konnikova to finish lecturing and get back to her narrative.

What Konnikova does do though, at a level that I haven't seen before in a book of this type, is be brutally honest about her mistakes and her learning process. And I do mean brutally: The book opens with her throwing up in a casino bathroom. (This is not a fun book to read if you don't like reading about stress reactions and medical problems. There aren't many of them, but they're... memorable.) Konnikova knows and can explain the psychological state she's trying to reach, but still finds it hard to do in the moment. Correctly reacting to probabilities, cutting losses, and being neither too over-confident nor too scared is very hard. Most books of this type elide over the repeated failures in a sort of training montage, which makes the process look easier than it feels. Konnikova tries to realistically show the setbacks and failures, and I think succeeds.

That relentless introspection and critical honesty is the best part of this book, but I think it's also behind the stream-of-consciousness digressions about psychology and philosophy. It's a true portrayal of how Konnikova makes sense of the world. A more polished and streamlined book about the poker would have been more dramatically engrossing, but it might have lost the deep examination of how she combines poker with her knowledge of psychology to change her thinking.

The one place where I think that self-reflection may fall a bit short, which I want to mention because I thought it was a missed opportunity, is around knowledge of hand probabilities. Konnikova makes a point, early in the book, of not approaching poker through the memorization and mathematics route and instead trying to find a play style that focuses on analyzing the other players and controlling her own emotions. This is a good hook, but by the end of the book it's not entirely true. The point that I think she was trying to make is that her edge against other poker players at her same level comes more from psychology than from calculation of precise odds in rare situations. This is true. But by the time she reaches high levels of play, she is using statistical simulators, practice tools, and intensive study just like any professional poker player. There is a minimum level of pure knowledge and memorization required that cannot be avoided. It's clear from the few things she says about this that those tools became more interesting to her as she became better at poker, and I wish she would have dug more into why and how that happened. How much of her newfound ability to make decisions and stick to a plan comes from emotional changes, and how much from that background store of confident knowledge? Or maybe those are different ways of looking at the same change?

I did appreciate Konnikova's explicit acknowledgment at the end of the book that poker did not, in the end, provide some deep insight into the balance of luck and skill in real life. Learning poker instead gave Konnikova more personal ability to make a plan for the things that she can control and let go of the things she can't. I'm glad that worked for her, but since reading this book I have noticed former poker players who think life is more knowable than it is. Poker combines random chance with psychological play against other people, but it does so in a way and to an extent that is quantifiable. You can make the correct play and still lose a hand, but you can also know when this has happened. When you're used to analyzing the world through that frame and real life fails to provide that certainty, it's tempting to impose it anyway and insist your simplified models are more accurate because they're more comprehensible. But poker is a game, not a model; being more predictable and more constrained than real life is part of what makes it fun. The skill that Konnikova learned from it has a potential downside that she doesn't talk about.

I'm not sure how to sum up this book. Konnikova's internal analysis and honesty is truly admirable and illuminating, but it left me wanting to read a different book that was more focused on poker narration. I know there are lots of those books out there, but I doubt they would be written with Konnikova's self-awareness and lack of ego. However, they would probably also lack the moments that made me cringe or that were deeply uncomfortable to read. My feelings are mixed. But if you want popular psychology wrapped around a deeply honest account of the process of learning poker, I suspect this book is one of a kind.

Rating: 7 out of 10

22 December 2020

Julien Danjou: I am a Software Engineer and I am in Charge

Fifteen years have passed since I started my career in IT, which is quite some time. I've been playing with computers for 25 years now, which makes me quite knowledgeable about the field, for sure. However, while I was fully prepared to bargain with computers, I was not prepared to do so with humans. The whole career management thing was unknown to me. I had no useful skills to navigate within the enterprise organization. I had to learn the ropes the hard way, failing along the way. It hurt. Almost ten years ago, I had the chance to meet a new colleague, Alexis Monville. Alexis was a team facilitator, and I started to work with him on many non-technical levels. He taught me a lot about agility and team organization. Working on this set of new skills changed how I envisioned my work and how I fit into the company.
I worked on those aspects of my job because I decided to be in charge of my career rather than keeping things boring. That was one of the best decisions I ever made. Growing the social aspect of my profession allowed me to develop and find inspiring jobs and missions. Getting to that point takes a lot of time and effort, and it's pretty hard to do it alone. My friend Alexis wrote an excellent book titled I am a Software Engineer and I am in Charge. I'm proud to have been the first reviewer of the book before it was released a few weeks ago. Many developers out there are stuck in a place where they are not excited by their colleagues' work and whose managers do not appropriately recognize their achievements. It would be best for them if they did something about that.
This book is an excellent piece for engineers who want to break the cycle of frustration. It covers many situations I encountered across my professional life these last years, giving good insights into how to solve them. To paraphrase Alexis, the answers to your career management problems are not on StackOverflow: they're not technical issues. However, you can still solve them with the right tools. That's where I am a Software Engineer and I am in Charge shines. It gives you leads, solutions, and exercises to get out of this kind of situation. It helps increase your impact and satisfaction at work. I love this book, and I wish I had access to it years ago. Developing technical leadership is not easy and requires a mindset shift. Having a way to bootstrap yourself with this is a luxury. If you're a software engineer at the beginning of your career or struggling with your current professional situation, I highly recommend reading this book! You'll get a fast track on your career, for sure.

20 December 2020

Sven Hoexter: Mock a Serial pty Device with socat

Another note to myself before I forget about this nifty usage of socat again. I was looking for something to mock a serial device, similar to a microcontroller which usually ends up as /dev/ttyACM0 and might output some text. What I found is a very helpful post on stackoverflow showing an example utilizing socat.
$ socat -d -d pty,rawer pty,rawer
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/8
2020/12/20 21:37:53 socat[29130] N PTY is /dev/pts/11
2020/12/20 21:37:53 socat[29130] N starting data transfer loop with FDs [5,5] and [7,7]
Write whatever you need to the second pty, here /dev/pts/11, e.g.
$ i=0; while :; do echo "foo: ${i}" > /dev/pts/11; let i++; sleep 5; done
Now you can listen with whatever you like, e.g. some tool you work on, on the first pty, here /dev/pts/8. For demonstration purposes just use cat:
$ cat /dev/pts/8
foo: 0
foo: 1
socat is an awesome tool; looking through the manpage requires some knowledge about sockets, but it's incredibly versatile.
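
If the "tool you work on" happens to be JavaScript-based, a minimal sketch of a programmatic reader might look like this (my own illustration, assuming Node.js and the pty path socat printed above):

// read the mocked serial device and echo whatever arrives on it
const fs = require('fs');
fs.createReadStream('/dev/pts/8').on('data', (chunk) => {
  process.stdout.write(chunk.toString());
});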

18 October 2020

Antoine Beaupré: CDPATH replacements

After reading this post I figured I might as well bite the bullet and improve on my CDPATH-related setup, especially because it does not work with Emacs. So I looked around for autojump-related alternatives that do.

What I use now

I currently have this in my .shenv (sourced by .bashrc):
export CDPATH=".:~:~/src:~/dist:~/wikis:~/go/src:~/src/tor"
This allows me to quickly jump into projects from my home dir, or the "source code" (~/src), "work" (~/src/tor), or wiki checkouts (~/wikis) directories. It works well from the shell, but unfortunately it's very static: if I want a new directory, I need to edit my config file, restart shells, etc. It also doesn't work from my text editor.

Shell jumpers

Those are command-line tools that can be used from a shell, generally with built-in shell integration so that a shell alias will find the right directory magically, usually by keeping track of the directories visited with cd. Some of those may or may not have integration in Emacs.

autojump

fasd

z

fzf

Emacs plugins not integrated with the shell

Those projects can be used to track files inside a project or find files around directories, but do not offer the equivalent functionality in the shell.

projectile

elpy
  • home page
  • elpy has a notion of projects so, by default, it will find files in the current "project" with C-c C-f, which is useful

bookmarks.el
  • built-in
  • home page
  • "Bookmarks record locations so you can return to them later"

recentf
  • built-in
  • home page
  • "builds a list of recently opened files. This list is is automatically saved across sessions on exiting Emacs - you can then access this list through a command or the menu"

References

https://www.emacswiki.org/emacs/LocateFilesAnywhere
